maximum likelihood estimator

Terms from Statistics for HCI: Making Sense of Quantitative Data

In statistics we often have some form of measurement M and want to estimate a parameter X; indeed, this is the job of statistics. One way to estimate X is to use the value that gives the maximum value of the likelihood of the measurement – this is the maximum likelihood estimator (MLE). Recalling that the likelihood is equal to the conditional probability of M given X, the MLE of X is the value of X that maximises P(M|X).
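As a concrete sketch (not from the book), suppose the measurement M is observing 7 heads in 10 coin flips and the parameter X is the unknown probability of heads. The likelihood is P(M|X) ∝ X^7 (1−X)^3, and a simple grid search over candidate values of X finds the maximiser, which here is the sample proportion 7/10:

```python
import math

def likelihood(x, k, n):
    # P(M|X): probability of k heads in n flips when the
    # (unknown) probability of heads is x
    return math.comb(n, k) * x**k * (1 - x)**(n - k)

def mle(k, n, steps=10000):
    # Grid search over candidate values of x for the one that
    # maximises the likelihood of the observed data
    candidates = [i / steps for i in range(steps + 1)]
    return max(candidates, key=lambda x: likelihood(x, k, n))

print(mle(7, 10))  # → 0.7, the sample proportion k/n
```

In this simple case the MLE can also be found analytically (it is always k/n), but the grid search illustrates the general idea: try values of X and keep the one that makes the observed M most probable.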
The MLE has many nice properties, and many common statistical estimators are based on it. However, there are also times when it gives poor estimates, for example with a mixed distribution (say, people drawn from very different backgrounds) where the likelihood has more than one mode.
The MLE assumes no knowledge of the parameter X beyond that derived from the measurement M. When we do know more about X (e.g. a relevant base rate), it may be better to encode this knowledge as a prior and use Bayesian methods.
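Continuing the illustrative coin-flip sketch above (my example, not the book's), incorporating a prior means maximising prior(X) × likelihood(X) rather than the likelihood alone, giving the posterior mode (the MAP estimate). With a Beta prior encoding a low base rate, the estimate is pulled well below the raw MLE of 0.7:

```python
def likelihood(x, k, n):
    # P(M|X) up to a constant: k heads in n flips,
    # heads-probability x
    return x**k * (1 - x)**(n - k)

def beta_prior(x, a, b):
    # Unnormalised Beta(a, b) density, encoding prior belief
    # about x (here, a known low base rate)
    return x**(a - 1) * (1 - x)**(b - 1)

def map_estimate(k, n, a, b, steps=10000):
    # Maximise prior(x) * likelihood(x): the posterior mode,
    # rather than the likelihood alone as in the MLE
    candidates = [i / steps for i in range(1, steps)]
    return max(candidates,
               key=lambda x: beta_prior(x, a, b) * likelihood(x, k, n))

# Same data (7 heads in 10 flips) but a Beta(2, 8) prior:
# the posterior mode is (7+1)/(10+8) = 8/18 ≈ 0.44
print(map_estimate(7, 10, 2, 8))
```

The prior acts as extra "virtual data" from the base rate, so the estimate sits between the prior's mode and the raw MLE.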

Used in Chap. 13: page 156

Also used in hcistats2e: Chap. 10: page 119

Used in glossary entries: base rate, Bayesian reasoning, conditional probability, likelihood, mode, prior distribution, the job of statistics